One goal of the OpenMined project is to train deep learning models efficiently while they remain homomorphically encrypted. Benchmarking new and existing features is therefore essential for achieving better and faster implementations.
Before using the Benchmark Testing Suite, you have to import it from syft.test.benchmark. After that, you can pass in the function that needs benchmark testing.
In [2]:
from syft.test.benchmark import Benchmark
Benchmark(str)
Out[2]:
The exec_time method is a basic tool for measuring a function's execution time. The method can be used as follows.
In [3]:
import time
def wait_a_second(seconds=3):  # Define a function for testing or use an existing one
    time.sleep(seconds)

# Call the function without params
exec_time = Benchmark(wait_a_second).exec_time()
print("EXECUTION TIME: {} SECONDS".format(exec_time))
As we can see, the method returns the function's execution time in seconds. Additional params can be passed along with the function as follows.
In [3]:
# Call the function with params
exec_time = Benchmark(wait_a_second, seconds=1).exec_time()  # Pass the function and its params into the class
print("(2) EXECUTION TIME: {} SECONDS".format(exec_time))
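Under the hood, a wrapper like this can be approximated with `time.perf_counter`. The `SimpleBenchmark` class below is a hypothetical stand-in for illustration only, not syft's actual implementation:

```python
import time

class SimpleBenchmark:
    """Hypothetical stand-in for syft's Benchmark wrapper (illustrative only)."""

    def __init__(self, func, *args, **kwargs):
        # Store the function and the arguments it should be called with.
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def exec_time(self):
        # Measure wall-clock time around a single call.
        start = time.perf_counter()
        self.func(*self.args, **self.kwargs)
        return time.perf_counter() - start

def wait_a_moment(seconds=0.1):
    time.sleep(seconds)

elapsed = SimpleBenchmark(wait_a_moment, seconds=0.1).exec_time()
print("EXECUTION TIME: {:.3f} SECONDS".format(elapsed))
```

`time.perf_counter` is preferred over `time.time` here because it is monotonic and has the highest available resolution for measuring short intervals.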
It is possible to get the execution time per line using the profile_lines() method.
In [4]:
def some_function(count):
    a = 6 * 8
    b = 6 ** 3
    c = a + b
    x = [a, b, c]
    for i in range(count):
        a += x[0] * i + x[1] + x[2]

Benchmark(some_function, count=5).profile_lines()
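Per-line timing like this can be sketched in pure Python with `sys.settrace`, which fires a callback on every executed line of a traced frame. The helper below is an illustrative approximation, not syft's profiler, and it carries the usual caveat that tracing itself adds overhead to the measured times:

```python
import sys
import time
from collections import defaultdict

def profile_lines_sketch(func, *args, **kwargs):
    """Rough per-line timer using sys.settrace (illustrative only)."""
    timings = defaultdict(float)
    target = func.__code__
    state = {"line": None, "t": None}

    def local_trace(frame, event, arg):
        # Charge the elapsed time since the last event to the previous line.
        now = time.perf_counter()
        if state["line"] is not None:
            timings[state["line"]] += now - state["t"]
        state["line"] = frame.f_lineno if event == "line" else None
        state["t"] = now
        return local_trace

    def global_trace(frame, event, arg):
        # Only trace frames belonging to the target function.
        if frame.f_code is target:
            return local_trace
        return None

    sys.settrace(global_trace)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(None)  # Always remove the tracer, even on error
    return dict(timings)

def sum_of_squares(count):
    total = 0
    for i in range(count):
        total += i * i
    return total

for lineno, seconds in sorted(profile_lines_sketch(sum_of_squares, 1000).items()):
    print("line {}: {:.6f}s".format(lineno, seconds))
```

The loop lines should dominate the totals, since they execute once per iteration while the other lines run only once.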